en
AI Ranking
每月不到10元,就可以无限制地访问最好的AIbase。立即成为会员
Home
News
Daily Brief
Income Guide
Tutorial
Tools Directory
Product Library
en
AI Ranking
Search AI Products and News
Explore worldwide AI information, discover new AI opportunities
AI News
AI Tools
AI Cases
AI Tutorial
Type :
AI News
AI Tools
AI Cases
AI Tutorial
2024-08-21 17:58:08
.
AIbase
.
11.2k
Slack AI Exposed for Data Leak Vulnerability: Malicious Prompt Injection Can Steal Private Channel Information
2024-07-30 09:32:25
.
AIbase
.
10.7k
Embarrassing! Meta's AI Security System Easily Bypassed by 'Spaces' Attack
The Prompt-Guard-86M model released by Meta is designed to defend against prompt injection attacks by restricting large language models from processing inappropriate inputs, thereby protecting system security. However, the model itself also exposes risks of being attacked. Research conducted by Aman Priyanshu found that by adding simple character spacing such as spaces or removing punctuation in the input, the model disregards prior security instructions, achieving an almost 100% success rate for attacks. This finding highlights the importance of AI security, despite Prompt